Current Issue: January–March · Volume: 2026 · Issue: 1 · Articles: 6
Software defect prediction (SDP) has emerged as a crucial task in ensuring software quality and reliability. The early and accurate identification of defect-prone modules significantly reduces maintenance costs and improves system performance. In this study, we introduce a novel hybrid model that combines Restricted Boltzmann Machines (RBM) for nonlinear feature extraction with Logistic Regression (LR) for classification. The model is validated across 21 benchmark datasets from the PROMISE and OpenML repositories. We conducted extensive experiments, including analyses of computational complexity and runtime comparisons, to assess performance in terms of accuracy, precision, recall, F1-score, and AUC. The results indicate that the RBM-LR model consistently outperforms baseline LR, as well as other leading classifiers such as Random Forest, XGBoost, and SVM. Statistical significance was confirmed using paired t-tests (p < 0.05). The proposed framework strikes a balance between interpretability and performance, with future work aimed at extending this approach through hybrid deep learning techniques and validation on industrial datasets to enhance scalability.
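The RBM-for-features, LR-for-classification pipeline described above can be sketched with scikit-learn's `BernoulliRBM` and `LogisticRegression`. The data, hyperparameters, and feature count below are illustrative stand-ins, not the paper's configuration or datasets:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)
# Synthetic stand-in for software metrics (e.g. LOC, complexity):
# 200 modules x 10 features, with a toy "defective" label
X = rng.rand(200, 10)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model = Pipeline([
    ("scale", MinMaxScaler()),          # BernoulliRBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=8, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
preds = model.predict(X)
```

The RBM's hidden-unit activations replace the raw metrics as LR's input, which is what lets the linear classifier capture the nonlinear structure the abstract refers to.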
This paper presents a non-intrusive method for estimating the distance between a camera and a human subject using a monocular vision system and statistically derived interpupillary distance (IPD) values. The proposed approach eliminates the need for individual calibration by utilizing average IPD values based on biological sex, enabling accurate, scalable distance estimation for diverse users. The algorithm, implemented in Python 3.12.11 using the MediaPipe Face Mesh framework, extracts pupil coordinates from facial images and calculates IPD in pixels. A sixth-degree polynomial calibration function, derived from controlled experiments using a uniaxial displacement system, maps pixel-based IPD to real-world distances across three intervals (20–80 cm, 80–160 cm, and 160–240 cm). Additionally, a geometric correction is applied to compensate for in-plane facial rotation. Experimental validation with 26 participants (15 males, 11 females) demonstrates the method’s robustness and accuracy, as confirmed by relative error analysis against ground truth measurements obtained with a Bosch GLM120C laser distance meter. Males exhibited lower mean relative errors across the three intervals (3.87%, 4.75%, and 5.53%), while females recorded higher mean relative errors (6.0%, 6.7%, and 7.27%). The results confirm the feasibility of the proposed method for real-time applications in human–computer interaction, augmented reality, and camera-based proximity sensing.
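The calibration idea, fitting a sixth-degree polynomial from pixel IPD to metric distance, can be sketched as follows. The focal length and average IPD are hypothetical values, the calibration samples are synthesized from a pinhole model rather than measured, and only the first interval is shown (the paper fits three):

```python
import numpy as np

F_PX = 800.0    # hypothetical focal length in pixels (not from the paper)
IPD_CM = 6.3    # assumed sex-averaged interpupillary distance in cm

# Synthetic calibration samples over the first interval (20-80 cm)
dist_cm = np.linspace(20.0, 80.0, 40)
ipd_px = F_PX * IPD_CM / dist_cm            # pinhole-model pixel IPD

z = ipd_px / ipd_px.max()                   # normalise for a well-conditioned fit
estimate = np.poly1d(np.polyfit(z, dist_cm, deg=6))
d_hat = estimate(z)                         # distance recovered from pixel IPD

# In-plane rotation: the Euclidean pupil separation is roll-invariant,
# unlike the horizontal pixel difference alone
def pupil_separation(p_left, p_right):
    return float(np.hypot(p_right[0] - p_left[0], p_right[1] - p_left[1]))
```

Fitting per interval keeps the polynomial well behaved, since pixel IPD varies hyperbolically with distance and a single polynomial over the full 20–240 cm range would fit poorly near the close end.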
We developed a low-cost, high-performance gesture recognition system with a dynamic hand gesture recognition technique based on the Transformer model combined with MediaPipe. The technique accurately extracts hand gesture key points. The system was designed with eight primary gestures: swipe up, swipe down, swipe left, swipe right, thumbs up, OK, click, and enlarge. These gestures serve as alternatives to mouse and keyboard operations, simplifying human–computer interaction interfaces to meet the needs of media system control and presentation switching. The experimental results demonstrated that the Transformer-based deep learning model achieved over 99% recognition accuracy, effectively enhancing recognition performance.
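At the core of a Transformer over gesture sequences is self-attention across the per-frame keypoint vectors that MediaPipe produces. A minimal single-head sketch in NumPy; the dimensions are illustrative (30 frames, 21 hand landmarks × 2 coordinates), not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a frame sequence X (T x D)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # frame-to-frame relevance
    return softmax(scores, axis=-1) @ V       # each frame attends to all others

rng = np.random.default_rng(0)
T, D = 30, 42          # 30 frames; 21 MediaPipe hand landmarks x (x, y)
X = rng.standard_normal((T, D))
Wq, Wk, Wv = (rng.standard_normal((D, 16)) * 0.1 for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)
print(H.shape)  # (30, 16)
```

This frame-to-frame attention is what lets the model capture the temporal shape of a dynamic gesture such as a swipe, which a per-frame classifier cannot.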
The evolution of Human–Computer Interaction (HCI) has laid the foundation for more immersive and dynamic forms of communication between humans and machines. Building on this trajectory, this work introduces a significant advancement in the domain of Human–Robot Manipulation (HRM), particularly in the remote operation of humanoid robots in complex scenarios. We propose the Advanced Manipulation Assistant System (AMAS), a novel manipulation method designed to be low cost, low latency, and highly efficient, enabling real-time, precise control of humanoid robots from a distance. This method addresses critical challenges in current teleoperation systems, such as delayed response, expensive hardware requirements, and inefficient data transmission. By leveraging lightweight communication protocols, optimized sensor integration, and intelligent motion mapping, our system ensures minimal lag and accurate reproduction of human movements in the robot counterpart. In addition to these advantages, AMAS integrates multimodal feedback combining visual and haptic cues to enhance situational awareness, close the control loop, and further stabilize teleoperation. This transition from traditional HCI paradigms to advanced HRM reflects a broader shift toward more embodied forms of interaction, where human intent is seamlessly translated into robotic action. The implications are far-reaching, spanning applications in remote caregiving, hazardous environment exploration, and collaborative robotics. AMAS represents a step forward in making humanoid robot manipulation more accessible, scalable, and practical for real-world deployment.
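One way the "lightweight communication protocol" idea can be sketched is a small fixed-layout binary packet per pose frame; compact fixed-size packets keep serialization and transmission time low. This wire format is purely hypothetical, not AMAS's actual protocol:

```python
import struct
import time

# Hypothetical wire format: little-endian, 8-byte float64 timestamp,
# 1-byte joint count, then one float32 per joint angle (radians)
def pack_pose(timestamp, joints):
    return struct.pack("<dB%df" % len(joints), timestamp, len(joints), *joints)

def unpack_pose(payload):
    ts, n = struct.unpack_from("<dB", payload)          # header: 9 bytes
    joints = struct.unpack_from("<%df" % n, payload, 9) # joint angles follow
    return ts, list(joints)

pkt = pack_pose(time.time(), [0.1, -0.5, 1.2])
ts, joints = unpack_pose(pkt)
print(len(pkt))  # 21
```

A 3-joint frame fits in 21 bytes, so even a 1 kHz control loop stays far below typical link bandwidth; the timestamp lets the receiver drop stale frames rather than queue them, which is what keeps perceived latency low.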
Humans in a society generally tend to implicitly adhere to the shared social norms established within that culture. Robots operating in a dynamic environment shared with humans are also expected to behave socially to improve their interaction and enhance their likability among humans. Especially when moving into close proximity to their human partners, robots should convey perceived safety and intelligence. In this work, we model human proxemics as robot navigation costs, allowing the robot to exhibit avoidance behavior around humans or to initiate interactions when engagement is required. The proxemic model enhances robot navigation by incorporating human-aware behaviors, treating humans not as mere obstacles but as social agents with personal space preferences. The model of interaction positions estimates suitable locations relative to the target person for the robot to approach when an engagement occurs. Our evaluation on human–robot interaction data and simulation experiments demonstrates the effectiveness of the proposed models in guiding the robot’s avoidance and approaching behaviors toward humans.
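A common way to encode proxemics as a navigation cost, used here only as an illustrative stand-in for the paper's model, is an anisotropic Gaussian around the person, elongated along the gaze direction so personal space extends further in front than to the sides:

```python
import numpy as np

def proxemic_cost(px, py, hx, hy, h_theta, sigma_front=1.2, sigma_side=0.6):
    """Navigation cost in [0, 1] at robot position (px, py) around a person
    standing at (hx, hy) and facing direction h_theta (radians).
    The sigma values are illustrative, not calibrated from the paper's data."""
    dx, dy = px - hx, py - hy
    # Rotate into the person's frame: u along the gaze, v to the side
    u = np.cos(h_theta) * dx + np.sin(h_theta) * dy
    v = -np.sin(h_theta) * dx + np.cos(h_theta) * dy
    return float(np.exp(-(u**2 / (2 * sigma_front**2)
                          + v**2 / (2 * sigma_side**2))))

# Cost peaks on top of the person and, at equal distance,
# is higher in front of them than beside them
front = proxemic_cost(1.0, 0.0, 0.0, 0.0, 0.0)
side = proxemic_cost(0.0, 1.0, 0.0, 0.0, 0.0)
```

Added to an occupancy costmap, such a term makes a planner route around people with a socially plausible margin instead of grazing them like static obstacles, while low-cost regions near the front quadrants can serve as candidate approach positions when engagement is required.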
Numerous imaging-based methods have been proposed for artifact monitoring and preservation, yet most rely on fixed-angle cameras or robotic platforms, leading to high cost and complexity. In this study, a portable monocular camera pose estimation and calibration framework is presented to capture artifact images from consistent viewpoints over time. The system is implemented on a Raspberry Pi integrated with a controllable three-axis gimbal, enabling untethered operation. Three methodological innovations are proposed. First, ORB feature extraction combined with a quadtree-based distribution strategy is employed to ensure uniform keypoint coverage and robustness under varying illumination conditions. Second, on-device processing is achieved using a Raspberry Pi, eliminating dependence on external power or high-performance hardware. Third, unlike traditional fixed setups or multi-degree-of-freedom robotic arms, real-time, low-cost calibration is provided, maintaining pose alignment accuracy consistently within three pixels. Through these innovations, a technically robust, computationally efficient, and highly portable solution for artifact preservation has been demonstrated, making it suitable for deployment in museums, exhibition halls, and other resource-constrained environments.
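The quadtree-based distribution step can be sketched as follows. This is an illustrative, ORB-SLAM-style variant rather than the paper's exact procedure: the densest cell is split repeatedly until a target cell count is reached, then only the strongest keypoint per cell is kept, spreading detections evenly across the image:

```python
def distribute_keypoints(kps, bounds, n_target):
    """Quadtree-style keypoint distribution (illustrative sketch).
    kps: list of (x, y, response); bounds: (x0, y0, x1, y1).
    Returns at most one keypoint per final cell, the strongest one."""
    cells = [(bounds, list(kps))]
    while len(cells) < n_target:
        cells.sort(key=lambda c: len(c[1]), reverse=True)
        if len(cells[0][1]) <= 1:
            break                       # every cell already holds one keypoint
        (x0, y0, x1, y1), pts = cells.pop(0)
        if (x1 - x0) < 1e-6:            # degenerate cell: cannot split further
            cells.append(((x0, y0, x1, y1), pts))
            break
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        sub = {
            (x0, y0, mx, my): [], (mx, y0, x1, my): [],
            (x0, my, mx, y1): [], (mx, my, x1, y1): [],
        }
        for p in pts:                   # route each point to its quadrant
            key = (mx if p[0] >= mx else x0, my if p[1] >= my else y0,
                   x1 if p[0] >= mx else mx, y1 if p[1] >= my else my)
            sub[key].append(p)
        cells += [(b, ps) for b, ps in sub.items() if ps]
    return [max(ps, key=lambda p: p[2]) for _, ps in cells]

# Four clustered corners plus one isolated: ask for two well-spread keypoints
pts = [(i, j, i + j) for i in (1, 2) for j in (1, 2)] + [(90, 90, 0.5)]
kept = distribute_keypoints(pts, (0, 0, 100, 100), 2)
```

Without this step, ORB responses tend to cluster on high-contrast regions; uniform coverage is what makes the subsequent pose alignment robust when illumination shifts the strongest corners around.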